I'm slowly approaching halfway through this first module and I can't believe how far I've come already! I also meant to send this out a bit earlier today but I fell asleep - whoopsie!
Last week's blog was delayed because not very much interesting was happening, and I was also struggling to meet a few deadlines. The topics we covered in the last two weeks were IT Risk Management and Cybersecurity, which definitely have some interesting elements but are quite technical and slightly dull. So I'm going to go back to another topic I discussed with a peer in the discussion posts this week. Finally, I will take you through some thoughts I have on imposter syndrome. Next week I will be talking about group projects!
Can Artificial Intelligence aid Medicine?
The short answer is yes - definitely. That's a wrap!
Unfortunately, medicine, similar to law enforcement, is a topic that cannot be discussed without considering ethics and social impact. Below is a reply I wrote to a post on the topic.
“Artificial intelligence techniques have the potential to be applied in almost every field of medicine” (Ramesh et al., 2004, p.334) and the potential of some models to learn from historical examples makes it an invaluable tool for medicine (Ramesh et al., 2004). However, there is still more research required for AI to be used across the healthcare industry.
Blocking its path are concerns about the privacy of data. Health data is extremely valuable but also intimately private to every individual, and there is an understandable inclination for people to be wary (Bartoletti, 2019). Data can be compromised. In July of this year, a data breach at Madrid's Ministry of Health affected 100,000 people, and the exposed records were not only private but individually identifiable (Mckenzie, 2021). The data does not appear to have been made public, but the incident casts doubt on the competency of data protection all the same. From a legal perspective, the more prevalent AI becomes, the more opportunities there are for vulnerabilities to be exposed. Healthcare providers are liable for data privacy and security, and using large amounts of data in AI models therefore opens them up to more potential risk (Davis, Francois, and Murray, 2021). Hence, concerns about the safety of data are not unjustified.
Looking to the future of AI and healthcare, there are ethical concerns about health professionals being taught to work with AI technology. Doctors would be required to work alongside machines, and there is pushback from practitioners regarding technology playing a part in the decision-making process (Ramesh et al., 2004). If an AI produces a result that a doctor disagrees with, how does the doctor decide whether to trust the machine's decision and not interfere? And whichever choice the doctor makes, where does the responsibility lie if something goes wrong (Bartoletti, 2019)? In healthcare, the stakes can be extremely high, and the consequences of something going wrong could be fatal. New training and guidance would be needed for this practice, which could be difficult to execute if health professionals do not believe in the AI models.
In conclusion, AI can be applied in all kinds of healthcare and no doubt has the potential to revolutionise the industry. However, because of the sensitive nature of the data, and the risks if something goes wrong, it will be difficult to develop and roll out these technologies quickly.
Here is another excerpt from a summary report that I wrote this week.
Another issue presented initially was that if practitioners depend on the wrong information, this could lead to mistakes in a patient's care. In the context of a worker choosing between two mismatched options, one presented by a computer and one handwritten, automation bias is "the tendency to favour or give greater credence to information supplied by technology" (Grissinger, 2019, p.320). People often overestimate technology's performance because they believe it is more likely to be correct than humans (Grissinger, 2019). The same ideas apply here, and to overcome this potential over-reliance on technology, practitioners must question the reliability of the information and implement monitoring and verification strategies to minimise errors (Grissinger, 2019). Finally, thorough testing and safety measures will need to be a vital component of the development of these technologies.

To add to these arguments, the unfortunate reality is that an AI can be wrong, and machines making medical decisions may not always make the right choice. Their behaviour is defined by the data we use to train them and by how we decide the machine will make its decisions. An obvious issue with this is that humans get things wrong. They can be biased, ignorant or, worse, intentionally destructive. Thus, it is important that ethical laws are passed to protect the people who are affected when things go wrong. Systems must be tested, we need proper regulation, and we need to maintain our current standard of training. This all sounds like quite a lot. But...
We also have to bear in mind that humans get it wrong all the time. Heck, they make TV shows out of it. It does not bear thinking about how many times a human made a call in a tight spot that turned out to be the wrong one. There are risks and dangers with AI, but we cannot ignore how many benefits it can bring to the field of medicine. A computer can look at all cases of a disease in the world, compare and contrast, and draw on far more history and experience than any human could. And beyond that, imagine a world where your health is assessed without the opinion or judgement of a random human stranger.
I'll leave you with a few questions:
- Would you trust a computer to make your medical decisions?
- Can you think of a moment in your medical history that an objective computer might have handled better?
- Do you think doctors would ever be able to work alongside computers?
Imposter Syndrome
To grab a definition from Google:
Imposter syndrome can be defined as a collection of feelings of inadequacy that persist despite evident success. 'Imposters' suffer from chronic self-doubt and a sense of intellectual fraudulence that override any feelings of success or external proof of their competence.

This feeling is an extremely common one for me, throughout my entire education and career. No matter how hard I try, or how much I succeed, I will mostly dismiss it as not enough, or get this overwhelming feeling that I am somehow "gaming the system". Perhaps you have experienced this feeling as well: sitting there, waiting for someone to point at you and say "Hey! I've caught you!!!".
I normally have a fairly decent handle on these feelings, as I've been in the working world a long time. I can remind myself that I'm not stupid, that I have decent skills that are useful to people, that I'm worth the money I get paid, and so on. But this has been quite a bit tougher to push down these last few weeks with the university course.
It particularly came to a head these past few weeks, as the public discussion forums started to take their toll, along with the seminar experience I mentioned in my last blog. It all starts to get a bit overwhelming having people directly judge my academic thoughts.
When I got the mark back for my essay in August, I was ecstatic. I felt like I had worked hard, and I genuinely impressed myself, having never written one before! It was truly an awesome feeling. Unfortunately, this essay was a practice essay, so the mark didn't count, BUT I was happy that at least I knew I could get the grade.
Last week, I got my first mark that will actually count and it is the highest grade I have ever received.
But this time, I was filled with doubts and fears as I read it...the best grade I have ever gotten...and I felt nothing except anxiety. When were they going to find out that this is all a lie? That it's a mistake?! This is what imposter syndrome does: it worms into your brain and convinces you that your skills are a lie. Even now, as I type this, I feel worried it's all a lie.
This is not something I have the solution to yet and it may take a lot more time to get over it as I study. I think the best I can do is try to celebrate the small things in life. Alongside this blog, I try to keep a regular gratitude journal that takes about 5 minutes to fill in. I thought I'd include one entry here:
What is the best thing that happened today?
I managed to finalise a huge chunk of university work which just a week ago felt like a huge mountain as well as take some time to myself to relax.
What is something you are grateful for in your life?
Blair - she gave me some extremely good advice today
What are three small things you are most grateful for today?
- Edd doing a silly dance
- Coffee with cake from Dad
- My new floral stemless wine glasses from Urban Outfitters
References
Bartoletti, I. (2019) ‘AI in Healthcare: Ethical and Privacy Challenges’, in Riaño, D., Wilk, S. and Teije, A. (eds.) Artificial Intelligence in Medicine. Springer, Cham, pp. 7–10.
Davis, W. K., Francois, A. and Camin Murray, C. (2021) ‘Top 10 Legal Considerations for the Development and Use of Artificial Intelligence in Healthcare’, Health Lawyer, 33(5), pp. 50–55.
Grissinger, M. (2019) ‘Understanding Human Over-Reliance on Technology’, P&T, 44(6), pp. 320–375.
Mckenzie, K. (2021) Ministry of health suffers data breach exposing details of 100,000 people in Madrid - “including king and Sanchez.” Olive Press News Spain. Available at: https://www.theolivepress.es/spain-news/2021/07/09/ministry-of-health-suffers-data-breach-exposing-details-of-100000-people-in-madrid-including-king-and-sanchez/ (Accessed: 6 September 2021).
Ramesh, A.N., Kambhampati, C., Monson, J.R. and Drew, P.J. (2004) ‘Artificial Intelligence in Medicine’, Annals of The Royal College of Surgeons of England, 86(5), pp. 334–337.